
    STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code to Python.
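
    The stochastic kinetics at the core of such a platform are typically computed with a Gillespie-style algorithm. As a rough illustration of the concept only (plain Python, not the STEPS API; the species, rate constants, and counts are invented for the example), a minimal well-mixed simulation of the reversible reaction A + B <-> C could look like this:

        import math
        import random

        def gillespie(a=100, b=100, c=0, kf=0.005, kb=0.1, t_end=10.0):
            """Stochastic simulation of A + B <-> C in one well-mixed volume."""
            t = 0.0
            while t < t_end:
                prop_f = kf * a * b            # propensity of A + B -> C
                prop_b = kb * c                # propensity of C -> A + B
                total = prop_f + prop_b
                if total == 0.0:
                    break                      # no reaction can fire any more
                # exponential waiting time; 1 - random() avoids log(0)
                t += -math.log(1.0 - random.random()) / total
                if random.random() * total < prop_f:
                    a, b, c = a - 1, b - 1, c + 1
                else:
                    a, b, c = a + 1, b + 1, c - 1
            return a, b, c

        print(gillespie())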

    Calcium, Synaptic Plasticity and Intrinsic Homeostasis in Purkinje Neuron Models

    We recently reproduced the complex electrical activity of a Purkinje cell (PC) with very different combinations of ionic channel maximum conductances, suggesting that a large parameter space is available to homeostatic mechanisms. It has been hypothesized that cytoplasmic calcium concentrations control the homeostatic activity sensors. This raises many questions for PCs, since in these neurons calcium plays an important role in the induction of synaptic plasticity. To address this question, we generated 148 new PC models. In these models the somatic membrane voltages are stable, but the somatic calcium dynamics are highly variable, in agreement with experimental results. Conversely, the calcium signal in spiny dendrites shows only small variability. We demonstrate that this localized control of calcium conductances preserves the induction of long-term depression in all models. We conclude that calcium is unlikely to be the sole activity sensor in this cell, but that there is a strong relationship between activity homeostasis and synaptic plasticity.
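
    The kind of parameter-space exploration described above can be sketched as rejection sampling over maximal conductances. The channel names, ranges, and stability test below are placeholders for illustration, not the actual Purkinje cell model:

        import random

        # Hypothetical channel names and conductance ranges (mS/cm2)
        CHANNELS = {"CaP": (0.1, 1.0), "CaT": (0.01, 0.1), "Kv3": (1.0, 10.0)}

        def sample_model():
            """Draw one random combination of maximal conductances."""
            return {ch: random.uniform(lo, hi) for ch, (lo, hi) in CHANNELS.items()}

        def is_stable(model):
            # Placeholder: in the real study this would run the full PC model
            # and test whether somatic firing stays within physiological bounds.
            return model["Kv3"] > 5.0 * model["CaP"]

        accepted = []
        for _ in range(1000):
            model = sample_model()
            if is_stable(model):
                accepted.append(model)
        print(f"{len(accepted)} candidate models with stable somatic activity")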

    Efficient simulation of neural development using shared memory parallelization

    The Neural Development Simulator, NeuroDevSim, is a Python module that simulates the most important aspects of brain development: morphological growth, migration, and pruning. It uses an agent-based modeling approach inherited from the NeuroMaC software. In each cycle, agents called fronts execute model-specific code. In the case of a growing dendritic or axonal front, this will be a choice between extension, branching, or growth termination. Somatic fronts can migrate to new positions, and any front can be retracted to prune parts of neurons. Collision detection prevents new or migrating fronts from overlapping with existing ones. NeuroDevSim is a multi-core program that uses an innovative shared-memory approach to achieve parallel processing without messaging. We demonstrate linear strong scaling up to 96 cores for large models and have run these successfully on 128 cores. Most of the shared-memory parallelism is achieved without memory locking. Instead, cores have write privileges only to private sections of arrays, while being able to read the entire shared array. Memory conflicts are avoided by a coding rule that allows only active fronts to use methods that need write access. The exception is collision detection, which is needed to avoid the growth of physically overlapping structures. For collision detection, a memory-locking mechanism was necessary to control access to grid points that register the location of nearby fronts. A custom approach using a serialized lock broker was able to manage both read and write locking. NeuroDevSim allows easy modeling of most aspects of neural development for models simulating a few complex neurons, thousands of simple neurons, or a mixture of both. Code available at https://github.com/CNS-OIST/NeuroDevSim.
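
    The lock-free part of this scheme can be pictured with a small sketch using Python's multiprocessing.shared_memory and NumPy. The array layout and names are invented for illustration rather than taken from NeuroDevSim; the point is that every process can read the whole shared array but writes only to its own private section:

        from multiprocessing import Process, shared_memory
        import numpy as np

        N_CORES = 4
        SLOTS_PER_CORE = 8

        def worker(shm_name, core_id):
            shm = shared_memory.SharedMemory(name=shm_name)
            arr = np.ndarray((N_CORES, SLOTS_PER_CORE), dtype=np.float64,
                             buffer=shm.buf)
            arr[core_id, :] = core_id   # write: restricted to this core's row
            _ = arr.sum()               # read: the entire shared array is visible
            shm.close()

        if __name__ == "__main__":
            shm = shared_memory.SharedMemory(create=True,
                                             size=N_CORES * SLOTS_PER_CORE * 8)
            arr = np.ndarray((N_CORES, SLOTS_PER_CORE), dtype=np.float64,
                             buffer=shm.buf)
            arr[:] = 0.0
            procs = [Process(target=worker, args=(shm.name, i))
                     for i in range(N_CORES)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(arr)                  # each row was filled by a different process
            shm.close()
            shm.unlink()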

    Models of Purkinje cell dendritic tree selection during early cerebellar development

    We investigate the relationship between primary dendrite selection of Purkinje cells and the migration of their presynaptic partner granule cells during early cerebellar development. During postnatal development, each Purkinje cell grows more than three dendritic trees, from which a primary tree is selected for further development, whereas the others completely retract. Experimental studies suggest that this selection process is coordinated by physical and synaptic interactions with granule cells, which undergo a massive migration at the same time. However, technical limitations hinder continuous experimental observation of multiple cell populations. To explore possible mechanisms underlying this selection process, we constructed a computational model using a new computational framework, NeuroDevSim. This study presents the first computational model that simultaneously simulates Purkinje cell growth and the dynamics of granule cell migration during the first two postnatal weeks, allowing exploration of the role of physical and synaptic interactions in dendrite selection. The model suggests that interaction with parallel fibers is important to establish the distinct planar morphology of Purkinje cell dendrites. Specific rules for selecting which dendritic trees to keep or retract result in larger winner trees with more synaptic contacts than random selection. A rule based on afferent synaptic activity was less effective than rules based on dendritic size or number of synapses.
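
    The selection rules compared in the model can be sketched as simple scoring functions over candidate trees. The tree attributes and numbers below are invented toy values, not model output:

        import random

        # Hypothetical candidate dendritic trees of one Purkinje cell
        trees = [
            {"name": "tree1", "length_um": 40.0, "n_synapses": 12},
            {"name": "tree2", "length_um": 65.0, "n_synapses": 30},
            {"name": "tree3", "length_um": 55.0, "n_synapses": 18},
        ]

        def select_random(trees):
            return random.choice(trees)

        def select_by_size(trees):
            return max(trees, key=lambda t: t["length_um"])

        def select_by_synapses(trees):
            return max(trees, key=lambda t: t["n_synapses"])

        winner = select_by_synapses(trees)
        losers = [t for t in trees if t is not winner]   # these would be retracted
        print("keep:", winner["name"],
              "retract:", [t["name"] for t in losers])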

    Climbing Fibers Provide Graded Error Signals in Cerebellar Learning

    The cerebellum plays a critical role in coordinating and learning complex movements. Although its importance has been well recognized, the mechanisms of learning remain hotly debated. According to the classical cerebellar learning theory, depression of parallel fiber synapses, instructed by error signals from climbing fibers, drives cerebellar learning. The uniqueness of long-term depression (LTD) in cerebellar learning has been challenged by evidence showing multi-site synaptic plasticity. In Purkinje cells, long-term potentiation (LTP) of parallel fiber synapses is now well established, and it can be achieved with or without climbing fiber signals, making the role of climbing fiber input more puzzling. The central question is how individual Purkinje cells extract global errors based on climbing fiber input. Previous data seemed to demonstrate that climbing fibers are inefficient instructors, because they were thought to carry “binary” error signals to individual Purkinje cells, which significantly constrains the efficiency of cerebellar learning in several regards. In recent years, new evidence has challenged the traditional view of “binary” climbing fiber responses, suggesting that climbing fibers can provide graded information to efficiently instruct individual Purkinje cells to learn. Here we review recent experimental and theoretical progress regarding modulated climbing fiber responses in Purkinje cells. Analog error signals are generated by the interaction of varying climbing fiber inputs with other simultaneous synaptic input and with the firing states of targeted Purkinje cells. Accordingly, the calcium signals that trigger synaptic plasticity can be graded in both amplitude and spatial range to affect the learning rate and even the learning direction. We briefly discuss how these new findings complement the learning theory and help to further our understanding of how the cerebellum works.
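
    One way to picture how a graded calcium signal can set both the rate and the direction of plasticity is a calcium-threshold rule of the kind reported for parallel fiber synapses, where moderate spine calcium drives LTP and large calcium transients drive LTD. The thresholds and rates below are illustrative values, not taken from the reviewed studies:

        THETA_LTP = 0.3   # lower threshold (arbitrary units of spine [Ca2+])
        THETA_LTD = 0.6   # higher threshold

        def weight_change(ca):
            if ca < THETA_LTP:
                return 0.0                       # small signal: no plasticity
            if ca < THETA_LTD:
                return +0.1 * (ca - THETA_LTP)   # moderate calcium: graded LTP
            return -0.2 * (ca - THETA_LTD)       # large calcium: graded LTD

        for ca in (0.1, 0.4, 0.9):
            print(f"[Ca2+]={ca:.1f} -> dw={weight_change(ca):+.3f}")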

    Recent data on the cerebellum require new models and theories

    The cerebellum has been a popular topic for theoretical studies because its structure was thought to be simple. Since David Marr and James Albus related its function to motor skill learning and proposed the Marr-Albus cerebellar learning model, this theory has guided and inspired cerebellar research. In this review, we summarize the theoretical progress that has been made within this framework of error-based supervised learning. We discuss the experimental progress that demonstrates more complicated molecular and cellular mechanisms in the cerebellum, as well as new cell types and recurrent connections. We also cover its involvement in diverse non-motor functions and evidence of other forms of learning. Finally, we highlight the need to incorporate these new experimental findings into an integrated cerebellar model that can unify its diverse computational functions.

    The Layer-Oriented Approach to Declarative Languages for Biological Modeling

    We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g., differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used to solve several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language.
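
    The mapping from a domain-specific layer to an underlying mathematical layer can be sketched in a few lines of Python. The dictionary schema and leak-current parameters below are invented for illustration; they are not the example language defined in the paper:

        # Domain layer: an ionic current described with biological concepts
        leak_current = {
            "kind": "ionic_current",
            "name": "i_leak",
            "gbar_mS_cm2": 0.3,
            "erev_mV": -54.4,
        }

        def compile_current(spec):
            """Semantic transformation: domain layer -> mathematical layer."""
            gbar, erev = spec["gbar_mS_cm2"], spec["erev_mV"]
            def current(v_mV):
                return gbar * (v_mV - erev)   # ohmic form: i = gbar * (V - E)
            return current

        i_leak = compile_current(leak_current)
        print(i_leak(-65.0))   # current at an assumed resting potential, uA/cm2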